In the field of artificial intelligence (AI), large language models (LLMs) excel at processing natural language but often struggle with complex reasoning tasks that demand multi-step inference, domain-specific knowledge, or the effective use of external tools. To overcome these limitations, researchers have explored augmenting LLMs with such tools. However, traditional augmentation methods typically require fine-tuning or additional training of the models, which limits their adaptability and flexibility across tasks.
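The training-free alternative alluded to here can be illustrated with a minimal sketch: the model is only *prompted* to emit tool calls in a fixed format, and a thin wrapper parses and executes them, so no weights are updated. The `TOOLS` registry, the JSON call format, and the `dispatch` helper below are hypothetical names chosen for illustration, and the model's output is simulated by a hard-coded string.

```python
import json

# Hypothetical tool registry: ordinary Python functions the model may invoke.
TOOLS = {
    "calculator": lambda expression: eval(expression, {"__builtins__": {}}),
    "lookup": lambda term: {"LLM": "large language model"}.get(term, "unknown"),
}

def dispatch(model_output: str):
    """Parse a JSON tool call emitted by the model and run the matching tool.

    No fine-tuning is involved: the model is merely prompted to produce calls
    in this format, and this wrapper executes them on its behalf.
    """
    call = json.loads(model_output)
    tool = TOOLS[call["tool"]]
    return tool(**call["arguments"])

# Simulated model completion for the question "What is 12 * 7?"
# (a real system would obtain this string from an LLM response).
simulated = '{"tool": "calculator", "arguments": {"expression": "12 * 7"}}'
print(dispatch(simulated))  # 84
```

Because the tool interface lives entirely in the prompt and the wrapper, new tools can be added or swapped without touching the model, which is precisely the adaptability that fine-tuning-based approaches lack.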